Continuous Optimization of Hyper-Parameters

Author

  • Yoshua Bengio

Abstract

Many machine learning algorithms can be formulated as the minimization of a training criterion which involves training errors on each training example and some hyper-parameters, which are kept fixed during this minimization. When there is only a single hyper-parameter, one can easily explore how its value affects a model selection criterion that is not the same as the training criterion and is used to select hyper-parameters. In this paper we present a methodology to select many hyper-parameters that is based on the computation of the gradient of a model selection criterion with respect to the hyper-parameters. We first consider the case of a training criterion that is quadratic in the parameters. In that case, the gradient of the selection criterion with respect to the hyper-parameters is efficiently computed by back-propagating through a Cholesky decomposition. In the more general case, we show that the implicit function theorem can be used to derive a formula for the hyper-parameter gradient, but this formula requires the computation of second derivatives of the training criterion.
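For the quadratic case described in the abstract, the hyper-parameter gradient can be made concrete with ridge regression, where a single hyper-parameter λ weights a quadratic penalty. The following is a minimal numpy sketch, not the paper's implementation (which back-propagates through a Cholesky decomposition); the function names are illustrative, and the implicit-function-theorem formula specializes here to dw/dλ = -(XᵀX + λI)⁻¹ w.

```python
import numpy as np

def ridge_solution(X, y, lam):
    """Minimizer of the quadratic training criterion ||X w - y||^2 + lam ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def val_error(Xv, yv, w):
    """Model selection criterion: squared error on held-out data."""
    r = Xv @ w - yv
    return float(r @ r)

def hyper_gradient(X, y, Xv, yv, lam):
    """dE/dlam via the implicit function theorem.

    The training-criterion Hessian is H = 2(X^T X + lam I) and the mixed
    second derivative d(grad_w C)/dlam is 2 w, so
        dw/dlam = -H^{-1} * 2 w = -(X^T X + lam I)^{-1} w,
    and the chain rule gives dE/dlam = (grad_w E) . (dw/dlam).
    """
    d = X.shape[1]
    w = ridge_solution(X, y, lam)
    dw_dlam = -np.linalg.solve(X.T @ X + lam * np.eye(d), w)
    grad_E_w = 2.0 * Xv.T @ (Xv @ w - yv)  # gradient of validation error w.r.t. w
    return float(grad_E_w @ dw_dlam)
```

With this gradient in hand, λ can be tuned by ordinary gradient descent on the validation error, which is the point of the methodology the abstract describes.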


Related articles

Continuous Hyper-parameter Learning for Support Vector Machines

In this paper, we address the problem of determining optimal hyper-parameters for support vector machines (SVMs). The standard way of solving the model selection problem is grid search, which performs an exhaustive search over a pre-defined, discretized set of possible parameter values, evaluating the cross-validation error until the best setting is found. We developed a bi-level opt...
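The grid-search baseline this snippet describes can be sketched as follows. This is a generic illustration with ridge regression standing in for the SVM, and the names (`cv_error`, `grid_search`) are mine, not from the paper.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression fit, standing in for an SVM trainer."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_error(X, y, lam, k=5):
    """Mean squared validation error over k contiguous folds (illustrative split)."""
    n = len(y)
    folds = np.array_split(np.arange(n), k)
    errs = []
    for idx in folds:
        mask = np.ones(n, dtype=bool)
        mask[idx] = False
        w = ridge_fit(X[mask], y[mask], lam)
        r = X[idx] @ w - y[idx]
        errs.append(np.mean(r ** 2))
    return float(np.mean(errs))

def grid_search(X, y, grid):
    """Exhaustive search over a discretized grid of hyper-parameter values."""
    scores = {lam: cv_error(X, y, lam) for lam in grid}
    best = min(scores, key=scores.get)
    return best, scores
```

The cost grows exponentially with the number of hyper-parameters, which is the motivation for the gradient-based and bi-level alternatives discussed in these papers.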


Determining Optimal Support Vector Machines for Classification of Hyperspectral Imagery Based on a Genetic Algorithm

Hyperspectral remote sensing imagery, owing to its rich spectral information, provides an efficient tool for ground classification in complex geographical areas containing similar classes. Given the robustness of Support Vector Machines (SVMs) in high-dimensional spaces, they are an efficient tool for the classification of hyperspectral imagery. However, there are two optimization issues which s...


Maximum Likelihood-Based Online Adaptation of Hyper-Parameters in CMA-ES

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely accepted as a robust, derivative-free continuous optimization algorithm for non-linear and non-convex optimization problems. CMA-ES is well known to be almost parameterless, meaning that only one hyper-parameter, the population size, is proposed to be tuned by the user. In this paper, we propose a principled approach called se...


Memory Based Stochastic Optimization for Validation and Tuning of Function Approximators

This paper focuses on the optimization of hyper-parameters for function approximators. We describe a kind of racing algorithm for continuous optimization problems that spends less time evaluating poor parameter settings and more time honing its estimates in the most promising regions of the parameter space. The algorithm is able to automatically optimize the parameters of a function approximato...
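The snippet above is truncated, so the following is not the authors' algorithm but a generic successive-elimination ("racing") sketch of the idea it describes: spend few evaluations on poor settings and concentrate effort on the promising ones. All names are hypothetical.

```python
import numpy as np

def race(candidates, noisy_eval, rounds=4, keep_frac=0.5, seed=0):
    """Racing-style elimination over hyper-parameter candidates.

    Each round, every surviving candidate gets one fresh noisy evaluation;
    the worst fraction is then dropped, so bad settings are abandoned early
    while the estimates for promising settings keep being refined.
    """
    rng = np.random.default_rng(seed)
    alive = list(candidates)
    totals = {c: 0.0 for c in alive}
    counts = {c: 0 for c in alive}
    for _ in range(rounds):
        for c in alive:
            totals[c] += noisy_eval(c, rng)
            counts[c] += 1
        alive.sort(key=lambda c: totals[c] / counts[c])  # lower mean error first
        alive = alive[:max(1, int(len(alive) * keep_frac))]
    return alive[0]
```

A memory-based racing algorithm, as in the paper, would additionally model the error surface from all past evaluations rather than treating candidates independently.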


Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for Continuous Control

Policy gradient methods in reinforcement learning have become increasingly prevalent for state-of-the-art performance in continuous control tasks. Novel methods typically benchmark against a few key algorithms, such as deep deterministic policy gradients and trust region policy optimization. As such, it is important to present and use consistent baseline experiments. However, this can be diffic...




Publication date: 2000